Effective mitosis localization is a crucial precursor task for determining tumor prognosis and grading. Automated detection through deep-learning-based image analysis often fails due to inherent domain bias. This paper presents a domain homogenizer for mitosis detection that attempts to alleviate domain differences in histology images via adversarial reconstruction of the input images. The proposed homogenizer is based on a U-Net architecture and can effectively reduce domain differences commonly seen in histology imaging data. We demonstrate the effectiveness of our domain homogenizer by observing the reduction in domain differences between the preprocessed images. Using this homogenizer, together with a subsequent RetinaNet detector, we were able to outperform the baseline of the 2021 MIDOG challenge in terms of average precision of the detected mitotic figures.
The standard diagnostic procedure for targeted therapy in lung cancer treatment involves histological subtyping and the subsequent detection of key driver mutations such as EGFR. Even though molecular profiling can uncover driver mutations, the process is often expensive and time-consuming. Deep-learning-based image analysis offers a more economical alternative for discovering driver mutations directly from whole slide images (WSIs). In this work, we used a customized deep learning pipeline with weak supervision to identify morphological correlates of EGFR mutation in hematoxylin and eosin-stained WSIs, in addition to detecting tumor and histologically subtyping it. We demonstrate the effectiveness of the pipeline through rigorous experiments and ablation studies on two lung cancer datasets: TCGA and a private dataset from India. With the pipeline, we achieved an average area under the curve (AUC) for tumor detection, and an AUC of 0.942 for histological subtyping between adenocarcinoma and squamous cell carcinoma, on the TCGA dataset. For EGFR detection, we achieved an average AUC of 0.864 on the TCGA dataset and 0.783 on the dataset from India. Our key learning points include the following. First, there is no particular advantage in using a feature extractor pre-trained on histology if the feature extractor is to be fine-tuned on the target dataset anyway. Second, selecting patches with high cellularity, presumably capturing tumor regions, is not always helpful, since signs of a disease class may also be present in the tumor-adjacent stroma.
Accurate and cost-effective mapping of water bodies is of great significance for environmental understanding and navigation. However, the quantity and quality of information we can obtain about such environmental features is limited by various factors, including cost, time, safety, and the capabilities of existing data collection techniques. Measurement of water depth is an important component of such mapping, especially at shallow locations that can pose navigation risks or serve important ecological functions. Erosion and deposition at these locations, for example due to storms, cause rapid changes that necessitate repeated measurement. In this paper, we describe a low-cost, resilient, unmanned autonomous surface vehicle for bathymetric data collection using side-scan sonar. We discuss the adaptation of equipment and sensors for collecting navigation, control, and bathymetry data, and also outline the vehicle setup. This autonomous surface vehicle has been used to collect bathymetry from Powai Lake in Mumbai, India.
Conventional survey methods for finding surface resistivity are time-consuming and labor-intensive. Few studies have focused on finding resistivity/conductivity using remote sensing data and deep learning techniques. In this work, we evaluated the correlation between surface resistivity and Synthetic Aperture Radar (SAR) data by applying various deep learning methods, and tested our hypothesis in the Coso geothermal area, USA. For resistivity detection, L-band full-polarimetric SAR data acquired by UAVSAR were used, with MT (magnetotellurics)-inverted resistivity data as ground truth. We conducted experiments comparing various deep learning architectures and propose the use of a Dual-Input UNet (DI-UNet) architecture. DI-UNet predicts resistivity from full-polarimetric SAR data with a deep learning architecture and promises much faster surveys than conventional methods. Our proposed approach achieved promising results in mapping MT resistivity from SAR data.
Although the U-Net architecture has been widely used for segmenting medical images, we address two of its shortcomings in this work. First, the accuracy of the vanilla U-Net degrades when the shapes and sizes of the segmentation target regions vary significantly. Even though U-Net already has the ability to analyze features at various scales, we propose to explicitly add multi-scale feature maps in each convolutional module of the U-Net encoder to improve the segmentation of histology images. Second, the accuracy of a U-Net model also suffers when the annotations used for supervised learning are noisy or incomplete. This can happen due to the inherent difficulty for a human expert to identify and delineate all instances of a specific pathology very precisely and accurately. We address this challenge by introducing auxiliary confidence maps that put less emphasis on the boundaries of the given target regions. Furthermore, we utilize the bootstrapping properties of deep networks to address the missing-annotation problem in an intelligent way. In our experiments on a private dataset of breast cancer lymph nodes, where the primary task was the segmentation of germinal centers and sinus histiocytosis, we observed significant improvements over a U-Net baseline from the two proposed augmentations.
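The idea of explicitly adding multi-scale feature maps to an encoder block can be illustrated with a toy numpy sketch: a feature map is average-pooled at several scales, upsampled back, and concatenated along the channel axis. This is only a conceptual illustration under assumed scales `(1, 2, 4)`; the paper's actual module is a learned convolutional block.

```python
import numpy as np

def multi_scale_features(x, scales=(1, 2, 4)):
    """Augment a (channels, height, width) feature map with pooled-and-
    upsampled copies at several scales, concatenated along channels."""
    c, h, w = x.shape
    outputs = []
    for s in scales:
        if s == 1:
            outputs.append(x)
            continue
        # Average-pool with an s x s window (assumes h, w divisible by s) ...
        pooled = x.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        # ... then upsample back via nearest-neighbour repetition.
        outputs.append(pooled.repeat(s, axis=1).repeat(s, axis=2))
    return np.concatenate(outputs, axis=0)

feat = np.random.rand(8, 32, 32)   # (channels, height, width)
ms = multi_scale_features(feat)
print(ms.shape)                    # (24, 32, 32): 8 channels x 3 scales
```

A downstream convolution then sees each spatial position described at all three scales at once, instead of relying solely on the encoder's pooling path.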
Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume that at least a few task-relevant target domain training data are available for adaptation from the source to the target domain. However, this requires laborious model-tuning by the end-user who may prefer to have a system that works ``out-of-the-box." To address such practical scenarios, we identify a novel target domain (inference-time) VAD task where no target domain training data are available. To this end, we propose a new `Zero-shot Cross-domain Video Anomaly Detection (zxvad)' framework that includes a future-frame prediction generative model setup. Different from prior future-frame prediction models, our model uses a novel Normalcy Classifier module to learn the features of normal event videos by learning how such features are different ``relatively" to features in pseudo-abnormal examples. A novel Untrained Convolutional Neural Network based Anomaly Synthesis module crafts these pseudo-abnormal examples by adding foreign objects in normal video frames with no extra training cost. With our novel relative normalcy feature learning strategy, zxvad generalizes and learns to distinguish between normal and abnormal frames in a new target domain without adaptation during inference. Through evaluations on common datasets, we show that zxvad outperforms the state-of-the-art (SOTA), regardless of whether task-relevant (i.e., VAD) source training data are available or not. Lastly, zxvad also beats the SOTA methods in inference-time efficiency metrics including the model size, total parameters, GPU energy consumption, and GMACs.
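The pseudo-abnormal example construction can be pictured with a minimal sketch: a "foreign object" patch is pasted into a normal frame to create a training-free abnormal sample. This is a simplified stand-in for the Anomaly Synthesis module (the paper additionally routes the object through a fixed, untrained CNN; the function name and paste mechanics here are illustrative assumptions).

```python
import numpy as np

def synthesize_pseudo_abnormal(frame, patch, top, left):
    """Craft a pseudo-abnormal frame by pasting a foreign-object patch
    into a normal frame at (top, left); no parameters are trained."""
    out = frame.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

frame = np.zeros((32, 32))          # a "normal" grayscale frame
patch = np.ones((8, 8))             # a synthetic foreign object
pseudo = synthesize_pseudo_abnormal(frame, patch, 5, 5)
print(pseudo.sum())                 # 64.0: exactly the pasted patch
```

Because the synthesis is training-free, such examples can be generated on the fly for the Normalcy Classifier to contrast against genuinely normal frames.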
Transformer layers, which use an alternating pattern of multi-head attention and multi-layer perceptron (MLP) layers, provide an effective tool for a variety of machine learning problems. As the transformer layers use residual connections to avoid the problem of vanishing gradients, they can be viewed as the numerical integration of a differential equation. In this extended abstract, we build upon this connection and propose a modification of the internal architecture of a transformer layer. The proposed model places the multi-head attention sublayer and the MLP sublayer parallel to each other. Our experiments show that this simple modification improves the performance of transformer networks in multiple tasks. Moreover, for the image classification task, we show that using neural ODE solvers with a sophisticated integration scheme further improves performance.
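The structural change can be made concrete with a small numpy sketch contrasting the two wirings: the standard layer applies attention and the MLP in sequence with residual connections, while the proposed layer feeds the same normalized input to both sublayers and sums their outputs. This uses single-head attention, random untrained weights, and an assumed pre-norm placement purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 16                       # sequence length, model width

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

# Random (untrained) weights; the point here is the wiring, not learning.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
W1 = rng.normal(size=(d, 4 * d)) / np.sqrt(d)
W2 = rng.normal(size=(4 * d, d)) / np.sqrt(4 * d)

def attention(x):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(d)) @ v

def mlp(x):
    return np.maximum(x @ W1, 0) @ W2

def sequential_block(x):            # standard transformer layer
    x = x + attention(layer_norm(x))
    return x + mlp(layer_norm(x))

def parallel_block(x):              # proposed: both sublayers read one input
    h = layer_norm(x)
    return x + attention(h) + mlp(h)

x = rng.normal(size=(n, d))
print(parallel_block(x).shape)      # (10, 16)
```

Viewed as numerical integration, the parallel form evaluates both vector-field terms at the same state before taking the residual step, which is what makes more sophisticated ODE integration schemes natural to apply.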
Image segmentation is a fundamental task in computer vision. Data annotation for training supervised methods can be labor-intensive, motivating unsupervised methods. Some existing approaches extract deep features from pre-trained networks and build a graph to apply classical clustering methods (e.g., $k$-means and normalized-cuts) as a post-processing stage. These techniques reduce the high-dimensional information encoded in the features to pair-wise scalar affinities. In this work, we replace classical clustering algorithms with a lightweight Graph Neural Network (GNN) trained to achieve the same clustering objective function. However, in contrast to existing approaches, we feed the GNN not only the pair-wise affinities between local image features but also the raw features themselves. Maintaining this connection between the raw feature and the clustering goal allows to perform part semantic segmentation implicitly, without requiring additional post-processing steps. We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training our image segmentation GNN. Additionally, we use the Correlation-Clustering (CC) objective to perform clustering without defining the number of clusters ($k$-less clustering). We apply the proposed method for object localization, segmentation, and semantic part segmentation tasks, surpassing state-of-the-art performance on multiple benchmarks.
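One way to see how a clustering objective becomes a self-supervised loss, and why Correlation Clustering needs no fixed $k$, is the following hypothetical formulation (assumed for illustration, not taken from the paper): given signed pairwise affinities and a soft cluster assignment, minimize the negative correlation between affinities and co-assignment probabilities. Since the objective never references $k$ directly, $k$ can be an upper bound and unused clusters simply receive no mass.

```python
import numpy as np

def correlation_clustering_loss(A, S):
    """Soft correlation-clustering objective.  A holds signed affinities
    (positive = "same cluster", negative = "different"); S is an n x k soft
    assignment.  Minimizing -sum_ij A_ij * <s_i, s_j> rewards co-clustering
    positively related pairs and separating negatively related ones."""
    coassign = S @ S.T          # probability that points i and j co-cluster
    return -np.sum(A * coassign)

# Two obvious groups: positive affinity inside each, negative across.
A = np.block([[ np.ones((3, 3)), -np.ones((3, 3))],
              [-np.ones((3, 3)),  np.ones((3, 3))]])
good = np.repeat(np.eye(2), 3, axis=0)   # the correct 2-way assignment
bad = np.full((6, 2), 0.5)               # an uninformative assignment
print(correlation_clustering_loss(A, good) < correlation_clustering_loss(A, bad))
```

In the paper's setting, a GNN would produce `S` from both the affinities and the raw features, and this differentiable loss would be backpropagated through it.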
In object detection, post-processing methods like Non-maximum Suppression (NMS) are widely used. NMS can substantially reduce the number of false positive detections but may still keep some detections with low objectness scores. In order to find the exact number of objects and their labels in the image, we propose a post-processing method called Detection Selection Algorithm (DSA) which is used after NMS or related methods. DSA greedily selects a subset of detected bounding boxes, together with full object reconstructions that give the interpretation of the whole image with highest likelihood, taking into account object occlusions. The algorithm consists of four components. First, we add an occlusion branch to Faster R-CNN to obtain occlusion relationships between objects. Second, we develop a single reconstruction algorithm which can reconstruct the whole appearance of an object given its visible part, based on the optimization of latent variables of a trained generative network which we call the decoder. Third, we propose a whole reconstruction algorithm which generates the joint reconstruction of all objects in a hypothesized interpretation, taking into account occlusion ordering. Finally, we propose a greedy algorithm that incrementally adds or removes detections from a list to maximize the likelihood of the corresponding interpretation. DSA with NMS or Soft-NMS can achieve better results than NMS or Soft-NMS alone, as illustrated in our experiments on synthetic images with multiple 3D objects.
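The greedy add/remove step can be sketched in plain Python. In the paper, the interpretation likelihood comes from joint object reconstructions with occlusion ordering; here a simple stand-in score (detection scores minus a pairwise-overlap penalty, both assumed for illustration) makes the toggle-based greedy search runnable on its own.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def interpretation_score(selected, boxes, scores, overlap_penalty=1.0):
    """Stand-in for the image-interpretation likelihood."""
    s = sum(scores[i] for i in selected)
    sel = sorted(selected)
    for a in range(len(sel)):
        for b in range(a + 1, len(sel)):
            s -= overlap_penalty * iou(boxes[sel[a]], boxes[sel[b]])
    return s

def greedy_select(boxes, scores):
    """Incrementally toggle detections in/out while the score improves."""
    selected = set()
    while True:
        base = interpretation_score(selected, boxes, scores)
        best_delta, best_move = 0.0, None
        for i in range(len(boxes)):
            trial = selected ^ {i}      # add if absent, remove if present
            delta = interpretation_score(trial, boxes, scores) - base
            if delta > best_delta:
                best_delta, best_move = delta, trial
        if best_move is None:
            return sorted(selected)
        selected = best_move

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.6, 0.8]
print(greedy_select(boxes, scores))     # [0, 2]: the near-duplicate is dropped
```

The termination condition (no single toggle improves the score) guarantees the search stops at a local optimum of the chosen likelihood.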
Out-of-distribution (OOD) detection has attracted a large amount of attention from the machine learning research community in recent years due to its importance in deployed systems. Most of the previous studies focused on the detection of OOD samples in the multi-class classification task. However, OOD detection in the multi-label classification task remains an underexplored domain. In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task. Object detection models have an inherent ability to distinguish between objects of interest (in-distribution) and irrelevant objects (e.g., OOD objects) on images that contain multiple objects from different categories. These abilities allow us to convert a regular object detection model into an image classifier with inherent OOD detection capabilities with just minor changes. We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform these methods on a comprehensive suite of in-distribution and OOD benchmark datasets.
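A toy aggregation in the spirit of turning detector outputs into an OOD score (the exact formulation here is assumed, not taken from the paper): each detection candidate carries an objectness score and per-class probabilities; an image's in-distribution evidence is the best objectness-weighted class confidence over all candidates, and the OOD score is its complement.

```python
import numpy as np

def ood_score(objectness, class_probs):
    """objectness: (candidates,); class_probs: (candidates, classes).
    Returns 1 minus the best objectness-weighted class confidence."""
    confidence = objectness[:, None] * class_probs
    return 1.0 - confidence.max()

# An image with a confident in-distribution candidate ...
in_dist = ood_score(np.array([0.95, 0.2]),
                    np.array([[0.9, 0.1], [0.5, 0.5]]))
# ... versus one where nothing of interest is detected.
out_dist = ood_score(np.array([0.1, 0.15]),
                     np.array([[0.4, 0.6], [0.5, 0.5]]))
print(in_dist < out_dist)   # True: the in-distribution image scores lower
```

The appeal of this style of scoring is that it reuses signals the detector already produces, which is why only minor changes are needed to repurpose a detection model as a multi-label classifier with OOD awareness.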